In 1890, William James, in his textbook Principles of Psychology, remarked:
"Everyone knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others, and is a condition which has a real opposite in the confused, dazed, scatterbrained state which in French is called distraction, and Zerstreutheit in German.[1]"
Other authors have argued against this approach, claiming that 'There is no such thing as attention' [2]. They argue that much research studying attention makes the mistake of "treating it as a cause, when it is an effect" [3], and that attention cannot and should not be studied independent of the specific perceptual or other cognitive processes that are influenced by the presence or absence of attention.
Over the past hundred years, attention, loosely defined as the cognitive process of selectively concentrating on one aspect of the environment while ignoring others, has become one of the most intensely studied topics within psychology and cognitive neuroscience. Attention remains a major area of investigation within education, psychology and neuroscience. Areas of active investigation involve determining the source of the signals that generate attention, the effects of these signals on the tuning properties of sensory neurons, and the relationship between attention and other cognitive processes like working memory. Another ongoing line of research involves studying how attention is disrupted early in the development of different disorders, and how abnormalities in the allocation of attention can affect subsequent learning.[4]
In William James' time, the method most commonly used to study attention was introspection. However, as early as 1858 Franciscus Donders used mental chronometry to study attention, and it was considered a major field of intellectual inquiry by authors such as Sigmund Freud. One major debate in this period was whether it was possible to attend to two things at once (split attention). Walter Benjamin described this experience as "reception in a state of distraction." This disagreement could only be resolved through experimentation.
In the 1950s, research psychologists renewed their interest in attention when the dominant epistemology shifted from positivism (i.e., behaviorism) to realism during what has come to be known as the "cognitive revolution".[5] The cognitive revolution admitted unobservable cognitive processes like attention as legitimate objects of scientific study.
Modern research on attention began with the analysis of the "cocktail party problem" by Colin Cherry in 1953. At a cocktail party, how do people select the conversation that they are listening to and ignore the rest? This problem is at times called "focused attention", as opposed to "divided attention". Cherry performed a number of experiments, which became known as dichotic listening experiments, and which were extended by Donald Broadbent and others.[6] In a typical experiment, subjects would use a set of headphones to listen to two streams of words in different ears and selectively attend to one stream. After the task, the experimenter would question the subjects about the content of the unattended stream. Experiments by Gray and Wedderburn and later Anne Treisman pointed out various problems in Broadbent's early model and eventually led to the Deutsch-Norman model in 1968. In this model, no signal is filtered out; all are processed to the point of activating their stored representations in memory. The point at which attention becomes "selective" is when one of the memory representations is selected for further processing. At any time, only one can be selected, resulting in the attentional bottleneck.[7]
This debate became known as the early-selection vs late-selection models. In the early selection models (first proposed by Donald Broadbent and Anne Treisman), attention shuts down or attenuates processing in the unattended ear before the mind can analyze its semantic content. In the late selection models (first proposed by J. Anthony Deutsch and Diana Deutsch), the content in both ears is analyzed semantically, but the words in the unattended ear cannot access consciousness.[8] This debate has still not been resolved.
Anne Treisman developed the highly influential feature integration theory.[9] According to this model, attention binds different features of an object (e.g., color and shape) into consciously experienced wholes. Although this model has received much criticism, it is still widely cited and spawned similar theories with modification, such as Jeremy Wolfe's Guided Search Theory.[10]
In the 1960s, Robert Wurtz at the National Institutes of Health began recording electrical signals from the brains of macaques that were trained to perform attentional tasks. These experiments showed for the first time that there was a direct neural correlate of a mental process (namely, enhanced firing in the superior colliculus).
In the 1990s, psychologists began using PET, and later fMRI, to image the brain during attentive tasks. Because this expensive equipment was generally available only in hospitals, psychologists sought cooperation with neurologists. The pioneers of brain imaging studies of selective attention were the psychologist Michael I. Posner (already renowned for his seminal work on visual selective attention) and the neurologist Marcus Raichle. Their results soon sparked interest from the wider neuroscience community, which until then had focused on monkey brains, and neuroscientists became interested in research combining sophisticated experimental paradigms from cognitive psychology with these new brain imaging techniques. Although the older technique of EEG had long been used by cognitive psychophysiologists to study the brain activity underlying selective attention, the ability of the newer techniques to measure precisely localized activity inside the brain generated renewed interest from a wider community of researchers. The results of these experiments have shown broad agreement with the psychological, psychophysiological, and monkey studies.
In cognitive psychology there are at least two models which describe how visual attention operates. These models may be considered loosely as metaphors which are used to describe internal processes and to generate hypotheses that are falsifiable. Generally speaking, visual attention is thought to operate as a two-stage process.[11] In the first stage, attention is distributed uniformly over the external visual scene and processing of information is performed in parallel. In the second stage, attention is concentrated to a specific area of the visual scene (i.e. it is focused), and processing is performed in a serial fashion.
The first of these models to appear in the literature is the spotlight model. The term "spotlight" was first used by David LaBerge,[12] and was inspired by the work of William James, who described attention as having a focus, a margin, and a fringe.[13] The focus is an area that extracts information from the visual scene at high resolution; its geometric center is where visual attention is directed. Surrounding the focus is the fringe of attention, which extracts information in a much cruder fashion (i.e., at low resolution). The fringe extends out to a specified area, and this cut-off is called the margin.
The second model, called the zoom-lens model, was first introduced in 1983.[14] This model inherits all the properties of the spotlight model (the focus, the fringe, and the margin) but has the added property of changing in size. This size-change mechanism was inspired by the zoom lens of a camera, and any change in size can be described by a trade-off in the efficiency of processing.[15] The zoom lens of attention can be described in terms of an inverse trade-off between the size of the focus and the efficiency of processing: because attentional resources are assumed to be fixed, the larger the focus, the slower the processing of that region of the visual scene, since the fixed resource is distributed over a larger area. The focus of attention is thought to subtend a minimum of 1° of visual angle;[13][16] the maximum size has not yet been determined.
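The inverse trade-off can be made concrete with a simple back-of-the-envelope formalization (illustrative only; this equation does not appear in the cited sources): if a fixed pool of attentional resources R is spread uniformly over a focus of area A, the processing efficiency per location is

```latex
E = \frac{R}{A}, \qquad A \propto \theta^{2}
```

where θ is the visual angle subtended by the focus. Under this assumption, doubling the diameter of the focus quarters the resources available at each location, which is consistent with the slower processing reported for larger foci.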
Researchers have described two different aspects of how the mind selects which items in the environment to attend to.
The first aspect is called bottom-up processing, also known as stimulus-driven or exogenous attention. This term describes the aspects of attentional processing that are thought to be driven by the properties of the objects themselves. These aspects of attention are thought to involve the parietal and temporal cortices, as well as the brainstem.[17] Certain object properties, such as motion or a sudden loud noise, can attract our attention in a pre-conscious, non-volitional way. We attend to them whether we want to or not.[18]
The second aspect is called top-down processing, also known as goal-driven, endogenous attention, attentional control, or executive attention. This refers to the aspects of attentional orienting that are under the control of the person who is attending. In the brain, it is thought to be mediated primarily by the frontal cortex and basal ganglia.[17]
Attentional control, also known as endogenous or executive attention, refers to our capacity to choose what we pay attention to and what we ignore[19]. It is considered one of the executive functions[20], mediated primarily by frontal areas of the brain[17]. Subcomponents of attentional control include conflict resolution and inhibition[21]. Individuals' capacity to exercise attentional control has been shown to relate to other aspects of the executive functions, such as working memory[22].
Attention may be differentiated according to its status as "overt" versus "covert".[23] Overt attention is the act of directing the sense organs towards a stimulus source. Covert attention is the act of mentally focusing on one of several possible sensory stimuli, and is thought to be a neural process that enhances the signal from a particular part of the sensory panorama. For example, while reading, a shift of overt attention corresponds to moving the eyes to read different words, whereas a shift of covert attention occurs when you move your focus from the semantic processing of a word to the font or color of the word you are reading.
There are studies suggesting that the mechanisms of overt and covert attention may not be as separate as previously believed. Though humans and other primates can look in one direction while attending in another, there may be underlying neural circuitry that links shifts in covert attention to plans to shift gaze. For example, if individuals covertly attend to the right-hand portion of the visual field, eye movements in that direction may have to be actively suppressed.
The current view is that visual covert attention is a mechanism for quickly scanning the field of view for interesting locations. This shift in covert attention is linked to eye movement circuitry that sets up a slower saccade to that location.
One theory of selective attention is load theory, which holds that two kinds of load affect attention: perceptual and cognitive. Perceptual load concerns the subject's capacity to perceive or ignore stimuli, both task-related and non-task-related. Studies show that when many stimuli are present (especially if they are task-related), it is much easier to ignore the non-task-related stimuli, but when few stimuli are present the mind perceives the irrelevant stimuli along with the relevant ones. Cognitive load refers to the actual processing of the stimuli. Studies show that the ability to process stimuli decreases with age: younger people could perceive more stimuli and fully process them, but tended to process both relevant and irrelevant information, while older people could process fewer stimuli, but usually processed only the relevant information.[24]
Some people can process multiple stimuli; for example, trained Morse code operators have been able to copy 100% of a message while carrying on a meaningful conversation. This relies on the reflexive response that comes from "overlearning" the skill of Morse code reception/detection/transcription, so that it becomes an autonomous function requiring no specific attention to perform.
Most experiments show that one neural correlate of attention is enhanced firing. If a neuron has a certain response to a stimulus when the animal is not attending to the stimulus, then when the animal does attend to the stimulus, the neuron's response will be enhanced even if the physical characteristics of the stimulus remain the same.
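One simple way to illustrate this gain-like enhancement (a minimal sketch, not a model from any study cited here; the tuning-curve parameters are arbitrary) is to apply a multiplicative attention factor to a neuron's stimulus-driven response:

```python
import math

def firing_rate(stimulus_orientation, preferred_orientation,
                tuning_width=30.0, baseline=5.0, peak=50.0,
                attention_gain=1.0):
    """Gaussian tuning curve (rates in spikes/s, orientations in degrees).

    Attention is modeled as a multiplicative gain on the stimulus-driven
    part of the response; the neuron's preferred stimulus is unchanged.
    """
    delta = stimulus_orientation - preferred_orientation
    driven = peak * math.exp(-(delta ** 2) / (2 * tuning_width ** 2))
    return baseline + attention_gain * driven

# Identical physical stimulus, with and without attention:
unattended = firing_rate(10, 0, attention_gain=1.0)
attended = firing_rate(10, 0, attention_gain=1.3)
print(attended > unattended)  # attention enhances the response
```

Here the stimulus itself is identical between the two calls; only the gain factor differs, mirroring the experimental finding that attending to a physically unchanged stimulus enhances the neuron's response.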
In a recent review, Knudsen[25] describes a more general model which identifies four core processes of attention, with working memory at the center: working memory, which temporarily stores information for detailed analysis; competitive selection, which determines what information gains access to working memory; top-down sensitivity control, by which higher cognitive processes regulate the signal strength of competing information channels; and bottom-up salience filters, which automatically enhance the response to infrequent or biologically relevant stimuli.
Neurally, spatial maps at different hierarchical levels can enhance or inhibit activity in sensory areas and induce orienting behaviors such as eye movements.
In many cases attention produces changes in the EEG. Many animals, including humans, produce gamma waves (40–60 Hz) when focusing attention on a particular object or activity.[27]